Spectral Embedding Norm: Looking Deep into the Spectrum of the Graph Laplacian
The extraction of clusters from a dataset which includes multiple clusters
and a significant background component is a non-trivial task of practical
importance. In image analysis this manifests for example in anomaly detection
and target detection. The traditional spectral clustering algorithm, which
relies on the leading $K$ eigenvectors to detect the $K$ clusters, fails in such
cases. In this paper we propose the spectral embedding norm, which sums
the squared values of the first $I$ normalized eigenvectors, where $I$ can be
significantly larger than $K$. We prove that this quantity can be used to
separate clusters from the background in unbalanced settings, including extreme
cases such as outlier detection. The performance of the algorithm is not
sensitive to the choice of $I$, and we demonstrate its application on synthetic
and real-world remote sensing and neuroimaging datasets.
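A rough sketch of how such a per-point score could be computed (illustrative only, not the authors' implementation; the Gaussian-kernel affinity and the symmetric normalized Laplacian are assumptions for this example):

```python
# Illustrative sketch, not the authors' code: per-point "spectral embedding
# norm" = sum of squared entries of the first I eigenvectors (smallest
# eigenvalues) of a symmetric normalized graph Laplacian. The Gaussian
# affinity and the choice of Laplacian are assumptions for this example.
import numpy as np

def spectral_embedding_norm(X, n_eig=20, sigma=1.0):
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-sq_dists / (2.0 * sigma**2))          # Gaussian affinity
    d = W.sum(axis=1)
    L = np.eye(len(X)) - W / np.sqrt(np.outer(d, d))  # normalized Laplacian
    _, vecs = np.linalg.eigh(L)                       # ascending eigenvalues
    return (vecs[:, :n_eig] ** 2).sum(axis=1)         # one score per point

rng = np.random.default_rng(0)
points = np.vstack([rng.normal(0.0, 0.1, size=(30, 2)),     # tight cluster
                    rng.uniform(-5.0, 5.0, size=(100, 2))])  # background
scores = spectral_embedding_norm(points)
```

Since the full eigenvector matrix is orthonormal, each per-point score lies in $[0, 1]$; the abstract's claim is that, for a wide range of $I$, this quantity separates cluster points from the background.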
Implicit Graphon Neural Representation
Graphons are general and powerful models for generating graphs of varying
size. In this paper, we propose to directly model graphons using neural
networks, obtaining Implicit Graphon Neural Representation (IGNR). Existing
work in modeling and reconstructing graphons often approximates a target
graphon by a fixed resolution piece-wise constant representation. Our IGNR has
the benefit that it can represent graphons up to arbitrary resolutions, and
enables natural and efficient generation of arbitrary sized graphs with desired
structure once the model is learned. Furthermore, we allow the input graph data
to be unaligned and have different sizes by leveraging the Gromov-Wasserstein
distance. We first demonstrate the effectiveness of our model by showing its
superior performance on a graphon learning task. We then propose an extension
of IGNR that can be incorporated into an auto-encoder framework, and
demonstrate its good performance under a more general setting of graphon
learning. We also show that our model is suitable for graph representation
learning and graph generation.
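To make the "graphon as a network" idea concrete, here is a toy sketch (the class name and architecture are hypothetical, with random weights standing in for a trained model; this is not the IGNR architecture). A graphon is a symmetric function W: [0,1]² → [0,1]; once represented by a network, it can be evaluated at any resolution to generate graphs of any size:

```python
# Toy sketch of an implicit graphon representation: a small MLP maps a pair
# of latent coordinates (x, y) in [0,1]^2 to an edge probability in [0,1].
# Random weights stand in for training; names here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

class ImplicitGraphon:
    def __init__(self, hidden=16):
        self.W1 = rng.normal(size=(2, hidden))
        self.W2 = rng.normal(size=(hidden, 1))

    def __call__(self, x, y):
        def mlp(a, b):
            h = np.tanh(np.stack([a, b], axis=-1) @ self.W1)
            return 1.0 / (1.0 + np.exp(-(h @ self.W2).squeeze(-1)))
        return 0.5 * (mlp(x, y) + mlp(y, x))   # enforce W(x, y) = W(y, x)

def sample_graph(graphon, n, rng=rng):
    """Arbitrary-size generation: draw n latent positions in [0, 1], then
    sample each edge independently with probability W(u_i, u_j)."""
    u = rng.random(n)
    P = graphon(np.broadcast_to(u[:, None], (n, n)),
                np.broadcast_to(u[None, :], (n, n)))
    upper = np.triu(rng.random((n, n)) < P, k=1)
    return (upper | upper.T).astype(int)       # symmetric, zero diagonal

A = sample_graph(ImplicitGraphon(), 50)
```

Because the network is queried pointwise rather than stored as a fixed grid, the same model yields graphs of any size, in contrast to fixed-resolution piece-wise constant approximations.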
Evaluating Disentanglement in Generative Models Without Knowledge of Latent Factors
Probabilistic generative models provide a flexible and systematic framework
for learning the underlying geometry of data. However, model selection in this
setting is challenging, particularly when selecting for ill-defined qualities
such as disentanglement or interpretability. In this work, we address this gap
by introducing a method for ranking generative models based on the training
dynamics exhibited during learning. Inspired by recent theoretical
characterizations of disentanglement, our method does not require supervision
of the underlying latent factors. We evaluate our approach by demonstrating the
need for disentanglement metrics which do not require labels, i.e., the
underlying generative factors. We additionally demonstrate that our approach
correlates with baseline supervised methods for evaluating disentanglement.
Finally, we show that our method can be used as an unsupervised indicator for
downstream performance on reinforcement learning and fairness-classification
problems.
The Numerical Stability of Hyperbolic Representation Learning
Given the exponential growth of the volume of a ball with respect to its
radius, hyperbolic space can embed trees with arbitrarily small distortion
and has therefore received wide attention for representing hierarchical
datasets. However, this exponential growth comes at the price of numerical
instability: training hyperbolic learning models can encounter values that
are unrepresentable in floating-point arithmetic, leading to catastrophic
NaN problems. In this work, we carefully analyze the
limitations of two popular models of hyperbolic space, namely the
Poincaré ball and the Lorentz model. We first show that, under 64-bit
arithmetic, the Poincaré ball has a relatively larger capacity than
the Lorentz model for correctly representing points. Then, we theoretically
validate the superiority of the Lorentz model over the Poincaré ball from the
perspective of optimization. Given the numerical limitations of both models, we
identify one Euclidean parametrization of the hyperbolic space which can
alleviate these limitations. We further extend this Euclidean parametrization
to hyperbolic hyperplanes and demonstrate its ability to improve the
performance of hyperbolic SVMs.
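The capacity gap under 64-bit arithmetic can be illustrated with a small float64 experiment (a generic sketch of the phenomenon, not the paper's analysis). A point at hyperbolic distance r from the origin has Lorentz coordinates (cosh r, sinh r, 0, ...) on the hyperboloid x₀² − x₁² = 1, and Poincaré-ball norm tanh(r/2) < 1:

```python
# Float64 sketch of the representation limits discussed above (illustrative,
# not the paper's analysis).
import numpy as np

r = 30.0

# Lorentz model: cosh(30) and sinh(30) are both ~5.3e12 and differ only by
# e^-30 ~ 9e-14, far below the float64 spacing at that magnitude, so the
# hyperboloid constraint x0^2 - x1^2 = 1 is destroyed by rounding.
x0, x1 = np.cosh(r), np.sinh(r)
constraint = x0**2 - x1**2     # should equal 1, but does not in float64

# Poincaré ball: the same point has norm tanh(15) = 1 - 1.9e-13, which is
# still representable strictly inside the unit ball in float64.
p = np.tanh(r / 2.0)
```

At this radius the Poincaré coordinates remain valid while the Lorentz coordinates have already collapsed numerically onto the light cone, consistent with the capacity comparison above; the Euclidean parametrization the abstract identifies is one way to sidestep both failure modes.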